
    Achievement goal orientation profiles and performance in a programming MOOC

    It has been suggested that performance goals focused on appearing talented (appearance goals) and those focused on outperforming others (normative goals) have different consequences, for example, regarding performance. Accordingly, applying this distinction between appearance and normative goals alongside mastery goals, this study explores what kinds of achievement goal orientation profiles can be identified among over 2000 students participating in an introductory programming MOOC. Using Two-Step cluster analysis, five distinct motivational profiles are identified. Course performance and demographics of students with different goal orientation profiles are mostly similar, although students with Combined Mastery and Performance Goals perform slightly better than students with Low Goals. The observations are largely in line with previous studies conducted in different contexts. The differentiation of appearance and normative performance goals seemed to yield meaningful motivational profiles, but further studies are needed to establish their relevance and to investigate whether this information can be used to improve teaching. Peer reviewed.
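
    A minimal sketch of this style of profile analysis, assuming per-student survey scores: the paper uses Two-Step cluster analysis, whereas the sketch below substitutes scikit-learn's GaussianMixture, and the scale columns and placeholder data are illustrative assumptions only.

    # Sketch: clustering students into motivational profiles from
    # goal-orientation survey scores. Substitutes a Gaussian mixture model
    # for the paper's Two-Step cluster analysis; the three scale columns
    # (mastery, appearance, normative) and the data are placeholders.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Placeholder: per-student mean scores (1-7 Likert) on three scales.
    scores = rng.uniform(1, 7, size=(2000, 3))

    X = StandardScaler().fit_transform(scores)
    gmm = GaussianMixture(n_components=5, random_state=0).fit(X)
    profiles = gmm.predict(X)

    # Inspect cluster centres (in z-score units) to label profiles, e.g.
    # "Combined Mastery and Performance Goals" vs. "Low Goals".
    for k, centre in enumerate(gmm.means_):
        print(f"profile {k}: mastery={centre[0]:+.2f} "
              f"appearance={centre[1]:+.2f} normative={centre[2]:+.2f}")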

    Many bioinformatics programming tasks can be automated with ChatGPT

    Computer programming is a fundamental tool for life scientists, allowing them to carry out many essential research tasks. However, despite a variety of educational efforts, learning to write code can be a challenging endeavor for both researchers and students in life science disciplines. Recent advances in artificial intelligence have made it possible to translate human-language prompts to functional code, raising questions about whether these technologies can aid (or replace) life scientists' efforts to write code. Using 184 programming exercises from an introductory bioinformatics course, we evaluated the extent to which one such model -- OpenAI's ChatGPT -- can successfully complete basic- to moderate-level programming tasks. On its first attempt, ChatGPT solved 139 (75.5%) of the exercises. For the remaining exercises, we provided natural-language feedback to the model, prompting it to try different approaches. Within 7 or fewer attempts, ChatGPT solved 179 (97.3%) of the exercises. These findings have important implications for life-sciences research and education. For many programming tasks, researchers no longer need to write code from scratch. Instead, machine-learning models may produce usable solutions. Instructors may need to adapt their pedagogical approaches and assessment techniques to account for these new capabilities that are available to the general public. Comment: 13 pages, 4 figures, to be submitted for publication.
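
    The iterative feedback protocol described above can be sketched in a short loop. This is an outline under stated assumptions, not the authors' harness: the client usage follows the current openai Python SDK, while run_tests() and the prompt wording are hypothetical placeholders.

    # Sketch of the evaluation protocol: prompt the model with an exercise,
    # run its code against the exercise's tests, and feed failure output
    # back for up to 7 attempts. run_tests() is a hypothetical harness.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def run_tests(code: str) -> tuple[bool, str]:
        """Hypothetical: execute candidate code against the exercise's
        test cases and return (passed, feedback_message)."""
        raise NotImplementedError

    def solve_exercise(description: str, max_attempts: int = 7) -> bool:
        messages = [{"role": "user",
                     "content": f"Write code for this task:\n{description}"}]
        for _ in range(max_attempts):
            resp = client.chat.completions.create(model="gpt-4",
                                                  messages=messages)
            code = resp.choices[0].message.content
            passed, feedback = run_tests(code)
            if passed:
                return True
            # Natural-language feedback prompts a different approach.
            messages.append({"role": "assistant", "content": code})
            messages.append({"role": "user",
                             "content": f"That failed:\n{feedback}\n"
                                        "Please try a different approach."})
        return False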

    Exploring Personalization of Gamification in an Introductory Programming Course

    Gamification has been used in introductory programming courses, for example, to increase engagement with study materials, reduce procrastination, and increase attendance at practice sessions. Indeed, with the rapidly growing adoption of digital tools in such courses, the use of various game elements and mechanics to drive participation is increasing. Previous studies on gamification in computing have examined the effects across the whole student population. Prior work in other disciplines has found that the benefits associated with gamification may only be realized for some students, while others may even experience reduced motivation. The Hexad user types survey attempts to tackle this problem by grouping users into six different types for whom gamification should have different effects. The goal is to personalize the game elements for different user types, thus creating gamified experiences more suitable for individual learners. In this work, we study whether the Hexad survey could be used to guide the personalization of gamification in an introductory programming course. Specifically, we examine the quality of students' answers to the Hexad survey and explore whether they can be used to predict students' preferences for enabling gamification in the platform where they complete assignments. In our specific computing education context, we find that classifying students using the Hexad survey does not appear to be an effective approach for the automatic personalization of gamification. Peer reviewed.
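
    A minimal sketch of one step such a system needs, assuming per-student subscale scores: map each student to a dominant Hexad type. Taking the highest subscale mean is a common convention, not necessarily the paper's exact procedure, and the example scores are hypothetical.

    # Sketch: assign each student the Hexad type with the highest mean
    # subscale score; a personalization layer could then enable different
    # game elements per type. The scores below are hypothetical.
    HEXAD_TYPES = ["Philanthropist", "Socialiser", "Free Spirit",
                   "Achiever", "Disruptor", "Player"]

    def dominant_type(subscale_means: dict[str, float]) -> str:
        """Return the Hexad type with the highest mean subscale score."""
        return max(subscale_means, key=subscale_means.get)

    # Hypothetical student: mean score per subscale on a 1-7 scale.
    student = {"Philanthropist": 5.2, "Socialiser": 4.1, "Free Spirit": 6.0,
               "Achiever": 5.8, "Disruptor": 2.3, "Player": 4.9}
    print(dominant_type(student))  # -> "Free Spirit"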

    Prompt Problems: A New Programming Exercise for the Generative AI Era

    Large Language Models (LLMs) are revolutionizing the field of computing education with their powerful code-generating capabilities. Traditional pedagogical practices have focused on code writing tasks, but there is now a shift in importance towards code reading, comprehension and evaluation of LLM-generated code. Alongside this shift, an important new skill is emerging -- the ability to solve programming tasks by constructing good prompts for code-generating models. In this work we introduce a new type of programming exercise to hone this nascent skill: 'Prompt Problems'. Prompt Problems are designed to help students learn how to write effective prompts for AI code generators. A student solves a Prompt Problem by crafting a natural language prompt which, when provided as input to an LLM, outputs code that successfully solves a specified programming task. We also present a new web-based tool called Promptly which hosts a repository of Prompt Problems and supports the automated evaluation of prompt-generated code. We deploy Promptly for the first time in one CS1 and one CS2 course and describe our experiences, which include student perceptions of this new type of activity and their interactions with the tool. We find that students are enthusiastic about Prompt Problems, and appreciate how the problems engage their computational thinking skills and expose them to new programming constructs. We discuss ideas for the future development of new variations of Prompt Problems, and the need to carefully study their integration into classroom practice.Comment: Accepted to SIGCSE'24. arXiv admin note: substantial text overlap with arXiv:2307.1636
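
    The automated evaluation step can be sketched simply: run the code produced from a student's prompt against the task's test cases. The subprocess approach, file name, and test cases below are illustrative assumptions, not Promptly's actual implementation.

    # Sketch of a Promptly-style check: execute a candidate Python program
    # on each test input and compare its stdout to the expected output.
    import subprocess
    import sys

    def passes_tests(source_path: str, cases: list[tuple[str, str]]) -> bool:
        """Run the candidate program on each input; compare stdout."""
        for stdin_text, expected in cases:
            result = subprocess.run(
                [sys.executable, source_path],
                input=stdin_text, capture_output=True, text=True, timeout=5,
            )
            if result.stdout.strip() != expected.strip():
                return False
        return True

    # Hypothetical test cases for a task like "print the doubled input".
    cases = [("3\n", "6"), ("10\n", "20")]
    print(passes_tests("candidate.py", cases))  # "candidate.py" is assumed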

    On the differences between correct student solutions.

    We know that students solve problems in different ways, but we know little about the kinds of variation, or the degree of variation, between these student-generated solutions. In this paper, we propose a taxonomy that classifies the variation between correct student solutions in objective terms, and we show how applying the taxonomy provides instructors with additional insight into the differences between student solutions. This taxonomy may be used to inform instructors in selecting code examples for teaching purposes, and it opens the possibility of automatically applying the taxonomy to existing solution sets.
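
    One objective comparison of the kind such a taxonomy could support can be sketched with Python's ast module: normalise identifier names, then test whether two correct solutions share the same structure. This illustrates automated comparison in general; it is not the paper's taxonomy.

    # Sketch: rename every variable to a canonical placeholder, then
    # compare the dumped ASTs. Solutions that differ only in naming
    # compare equal; structural differences do not.
    import ast

    class RenameIdentifiers(ast.NodeTransformer):
        def __init__(self):
            self.names: dict[str, str] = {}

        def visit_Name(self, node: ast.Name) -> ast.Name:
            node.id = self.names.setdefault(node.id, f"v{len(self.names)}")
            return node

    def normalised(source: str) -> str:
        return ast.dump(RenameIdentifiers().visit(ast.parse(source)))

    a = "total = 0\nfor x in data:\n    total += x"
    b = "s = 0\nfor item in data:\n    s += item"
    print(normalised(a) == normalised(b))  # True: same structure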

    Pass Rates in Introductory Programming and in other STEM Disciplines

    Vast numbers of publications in computing education begin with the premise that programming is hard to learn and hard to teach. Many papers note that failure rates in computing courses, and particularly in introductory programming courses, are higher than their institutions would like. Two distinct research projects in 2007 and 2014 concluded that average success rates in introductory programming courses world-wide were in the region of 67%, and a recent replication of the first project found an average pass rate of about 72%. The authors of those studies concluded that there was little evidence that failure rates in introductory programming were concerningly high. However, there is no absolute scale by which pass or failure rates are measured, so whether a failure rate is concerningly high will depend on what that rate is compared against. As computing is typically considered to be a STEM subject, this paper considers how pass rates for introductory programming courses compare with those for other introductory STEM courses. A comparison of this sort could prove useful in demonstrating whether the pass rates are comparatively low, and if so, how widespread such findings are. This paper is the report of an ITiCSE working group that gathered information on pass rates from several institutions to determine whether prior results can be confirmed, and conducted a detailed comparison of pass rates in introductory programming courses with pass rates in introductory courses in other STEM disciplines. The group found that pass rates in introductory programming courses appear to average about 75%; that there is some evidence that they sit at the low end of the range of pass rates in introductory STEM courses; and that pass rates both in introductory programming and in other introductory STEM courses appear to have remained fairly stable over the past five years. All of these findings must be regarded with some caution, for reasons that are explained in the paper. Despite the lack of evidence that pass rates are substantially lower than in other STEM courses, there is still scope to improve the pass rates of introductory programming courses, and future research should continue to investigate ways of improving student learning in introductory programming courses. Peer reviewed.
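
    Once the rates are gathered, the comparison itself is simple arithmetic: mean pass rate per discipline across institutions. A minimal sketch with placeholder numbers, not the working group's data:

    # Sketch: average reported pass rates per introductory discipline.
    import statistics

    pass_rates = {  # discipline -> pass rates from different institutions
        "programming": [0.71, 0.78, 0.74, 0.77],
        "mathematics": [0.80, 0.76, 0.83],
        "physics":     [0.79, 0.82, 0.75],
    }

    for discipline, rates in sorted(pass_rates.items()):
        print(f"{discipline:12s} mean={statistics.mean(rates):.2f} n={len(rates)}")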

    "It's Weird That it Knows What I Want": Usability and Interactions with Copilot for Novice Programmers

    Recent developments in deep learning have resulted in code-generation models that produce source code from natural language and code-based prompts with high accuracy. This is likely to have profound effects in the classroom, where novices learning to code can now use free tools to automatically suggest solutions to programming exercises and assignments. However, little is currently known about how novices interact with these tools in practice. We present the first study that observes students at the introductory level using one such code auto-generating tool, GitHub Copilot, on a typical introductory programming (CS1) assignment. Through observations and interviews we explore student perceptions of the benefits and pitfalls of this technology for learning, present new observed interaction patterns, and discuss cognitive and metacognitive difficulties faced by students. We consider design implications of these findings, specifically in terms of how tools like Copilot can better support and scaffold the novice programming experience. Comment: 26 pages, 2 figures, TOCHI

    INTERACT 2015 Adjunct Proceedings. 15th IFIP TC.13 International Conference on Human-Computer Interaction 14-18 September 2015, Bamberg, Germany

    INTERACT is among the world’s top conferences in Human-Computer Interaction. Starting with the first INTERACT conference in 1990, this conference series has been organised under the aegis of the Technical Committee 13 on Human-Computer Interaction of the UNESCO International Federation for Information Processing (IFIP). This committee aims to develop the science and technology of the interaction between humans and computing devices. The 15th IFIP TC.13 International Conference on Human-Computer Interaction - INTERACT 2015 - took place from 14 to 18 September 2015 in Bamberg, Germany. The theme of INTERACT 2015 was "Connection.Tradition.Innovation". This volume presents the Adjunct Proceedings: it contains the position papers of the Doctoral Consortium students as well as those of the participants of the various workshops.

    A Think-Aloud Study of Novice Debugging

    Debugging is a core skill required by programmers, yet we know little about how to effectively teach the process of debugging. The challenges of learning debugging are compounded for novices who lack experience and are still learning the tools they need to program effectively. In this work, we report a case study in which we used a think-aloud protocol to gain insight into the behaviour of three students engaged in debugging tasks. Our qualitative analysis reveals a variety of helpful practices and barriers that limit the effectiveness of debugging. We observe that comprehension, evidence-based activities, and workflow practices all contribute to novice debugging success. Lack of sustained effort, precision, and methodical processes negatively impacts debugging effectiveness. We anticipate that understanding how students engage in debugging tasks will aid future work to address ineffective behaviours and promote effective debugging activities.